fix/refactor: atomicity and caching #347

Open · wants to merge 9 commits into develop from cleanup-non-transactional-contexts

Conversation

fominok
Collaborator

@fominok fominok commented Dec 27, 2024

This PR addresses atomicity issues in GroveDB and introduces several improvements to the codebase.

Issue being fixed or feature implemented

  1. Even with batching, our writes weren’t truly atomic: preparing a batch involves data fetches along the way, and without a transaction there is no guarantee that the fetched data still matches the stored data by the time the batch is committed. To fix this, we now always use a transaction, starting one internally if the caller doesn’t provide one (see the sketch after this list). This also made the other StorageContext implementations unnecessary, so they were removed along with many now-unused macros.
  2. Writes in our workflow are deferred operations: they accumulate in a storage batch and are executed at the end of an operation, so intermediate "writes" are not visible in storage until commit. Reading such pending data therefore requires access to the in-memory Merk tree that holds those changes. Managing multiple such instances without accidentally dropping them is complex, because reopening a Merk would read storage that may have diverged from the pending writes. To simplify this, we introduced the MerkCache structure, which provides easy access to pending data while ensuring each Merk instance is opened only once.
  3. Extended the path library, particularly the reference resolution functions. Since not all code has been updated yet, the old version remains in use alongside the new one.
  4. Resolved minor macro hygiene issues.
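As a rough illustration of point 1, here is a minimal, self-contained sketch of the "borrow the caller's transaction or start and commit our own" pattern. The Tx and TxRefSketch types and their methods are placeholders for this description only and do not mirror GroveDB's actual TxRef API:

// Placeholder transaction type; a real one would come from the storage engine.
struct Tx;

impl Tx {
    fn begin() -> Self {
        Tx
    }

    fn commit(self) {
        // A real transaction would persist its changes here.
    }
}

// Either borrows a transaction supplied by the caller or owns one it started.
enum TxRefSketch<'a> {
    Borrowed(&'a Tx),
    Owned(Tx),
}

impl<'a> TxRefSketch<'a> {
    // Wrap the caller's transaction if present, otherwise start a local one.
    fn new(existing: Option<&'a Tx>) -> Self {
        match existing {
            Some(tx) => TxRefSketch::Borrowed(tx),
            None => TxRefSketch::Owned(Tx::begin()),
        }
    }

    // Commit only if the transaction is ours; a borrowed transaction stays
    // under the caller's control and is left untouched.
    fn commit_local(self) {
        if let TxRefSketch::Owned(tx) = self {
            tx.commit();
        }
    }
}

fn main() {
    // No transaction supplied: the wrapper starts and commits its own.
    TxRefSketch::new(None).commit_local();

    // Transaction supplied: the wrapper borrows it; committing stays with the caller.
    let outer = Tx::begin();
    TxRefSketch::new(Some(&outer)).commit_local(); // no-op for the borrowed case
    outer.commit();
}

In the real code the same idea ensures that every read performed while preparing a batch observes the state the commit will later be applied against.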

What was done?

  1. Added the TxRef type to unify the handling of externally supplied transactions and the case where none is provided.
  2. Introduced MerkCache and began selective usage (see the sketch below). While not yet mandatory, this will be essential for feat: bidirectional references #104.
  3. Developed new versions of the reference path resolution and reference following functions; their usage is postponed for now because of cost differences (the new versions use the cache and can cost less).
  4. Performed general cleanup of the codebase.
  5. Added metadata storage to Merk.
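A minimal sketch of the caching idea behind point 2, assuming the goal is simply to open each subtree once and serve later accesses from memory so pending writes are never lost. FakeMerk and MerkCacheSketch are invented stand-ins, not GroveDB's actual types:

use std::collections::BTreeMap;

// Stand-in for an in-memory subtree holding pending (uncommitted) changes.
#[derive(Default)]
struct FakeMerk {
    pending: Vec<(Vec<u8>, Vec<u8>)>,
}

// Opens each subtree at most once and serves later accesses from memory, so
// pending data is never lost by reopening the subtree from diverged storage.
#[derive(Default)]
struct MerkCacheSketch {
    merks: BTreeMap<Vec<u8>, FakeMerk>,
}

impl MerkCacheSketch {
    fn get_merk(&mut self, path: Vec<u8>) -> &mut FakeMerk {
        // In the real code this is where a Merk would be opened from storage on
        // first access; here it is simply constructed.
        self.merks.entry(path).or_default()
    }

    // Drain all cached subtrees at the end of the operation, e.g. to build the
    // final storage batch from their pending writes.
    fn into_pending(self) -> Vec<(Vec<u8>, Vec<(Vec<u8>, Vec<u8>)>)> {
        self.merks
            .into_iter()
            .map(|(path, merk)| (path, merk.pending))
            .collect()
    }
}

fn main() {
    let mut cache = MerkCacheSketch::default();
    cache
        .get_merk(b"subtree_a".to_vec())
        .pending
        .push((b"key".to_vec(), b"value".to_vec()));

    // A second access to the same path reuses the cached instance, so the
    // pending write above is still visible.
    assert_eq!(cache.get_merk(b"subtree_a".to_vec()).pending.len(), 1);

    let drained = cache.into_pending();
    assert_eq!(drained.len(), 1);
}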

How Has This Been Tested?

  • Added tests for MerkCache and reference-related functions.
  • Verified changes with the existing GroveDB test suite.

Breaking Changes

None

Checklist

  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added or updated relevant unit/integration/functional/e2e tests.
  • I have made corresponding changes to the documentation.

For repository code-owners and collaborators only

  • I have assigned this pull request to a milestone.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced methods for metadata management in the Merk struct, allowing retrieval, insertion, and deletion of metadata.
    • Added a new invert method to the ReferencePathType enum for handling various reference types.
    • Implemented pop_segment method in CompactBytes struct for segment management.
    • Enhanced SubtreePath and SubtreePathBuilder with new comparison and display capabilities.
  • Bug Fixes

    • Improved error handling and cost management across various methods by adjusting how costs are passed in error handling macros.
  • Refactor

    • Transitioned to a transactional context in multiple modules, including storage and operations, improving clarity and consistency in transaction management.
    • Consolidated error handling and cost calculation logic across several methods.
  • Documentation

    • Updated comments and documentation for clarity, particularly around transaction handling and storage context management.
  • Tests

    • Added new tests for metadata management and reference inversion functionality.
    • Updated existing tests to reflect changes in transaction handling and context retrieval.

Contributor

coderabbitai bot commented Dec 27, 2024

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

I'll provide a comprehensive walkthrough of the changes based on the pull request summary.

Walkthrough

This pull request introduces significant architectural changes to the GroveDB ecosystem, focusing on enhancing transaction management, refining error handling, and improving code clarity. The primary modifications include transitioning from non-transactional to transactional storage contexts, removing deprecated error types, and streamlining cost management across various components. Key areas of change include transaction handling, metadata management, reference path handling, and overall code structure simplification.

Changes

File/Module | Change Summary
costs/src/context.rs | Added for_ok method to CostResult for side-effect operations
grovedb-version/src/version/ | Added new version-related fields for GroveDB operations
grovedb/src/batch/ | Updated batch processing with transactional contexts
grovedb/src/element/ | Enhanced element handling, added new methods like get_with_value_hash
grovedb/src/reference_path.rs | Added invert and absolute_qualified_path methods
merk/src/merk/meta.rs | Introduced metadata management methods
path/ | Enhanced path handling, added new traits and methods
storage/ | Removed non-transactional storage contexts

Sequence Diagram

sequenceDiagram
    participant Client
    participant GroveDB
    participant Transaction
    participant StorageContext
    participant Merk

    Client->>GroveDB: Start Transaction
    GroveDB->>Transaction: Create Transaction
    Transaction-->>GroveDB: Transaction Reference
    
    Client->>GroveDB: Perform Operations
    GroveDB->>StorageContext: Execute Transactional Operations
    StorageContext->>Merk: Modify/Query Data
    
    Client->>GroveDB: Commit Transaction
    GroveDB->>Transaction: Commit Changes
    Transaction-->>StorageContext: Persist Changes

Poem

🐰 A Rabbit's Ode to Transactions 🌿

In GroveDB's realm of data and might,
Transactions dance with transactional light,
No more contexts without a trace,
Efficiency finds its rightful place!

Hop, hop, through paths both new and old,
A refactored tale of storage untold! 🚀



@fominok force-pushed the cleanup-non-transactional-contexts branch from 1977d07 to faeed02 on December 27, 2024 16:40
@fominok force-pushed the cleanup-non-transactional-contexts branch from f9218dc to 70e0aa9 on December 27, 2024 17:04
@fominok
Collaborator Author

fominok commented Dec 27, 2024

@coderabbitai review

Contributor

coderabbitai bot commented Dec 27, 2024

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@fominok fominok marked this pull request as ready for review December 27, 2024 17:21
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (41)
merk/src/test_utils/temp_merk.rs (4)

52-53: Be cautious with 'static references for merk and tx.
Storing these fields with a 'static lifetime is acceptable for test utilities, but it can be risky if used in production settings. It can lead to unintended memory leaks or unsafe pointer usage.


63-63: Possible memory leak with Box::leak.
Leaking a Box to extend the transaction’s lifetime can be acceptable in a short-lived test context, but be mindful that it won’t be freed until process termination.


95-101: Unwrap without fallback in commit operations.
Calling commit_multi_context_batch and commit_transaction with unwrap().expect(...) is acceptable in test utilities but consider capturing or logging the error for investigative purposes.


124-130: Drop implementation commits and drops storage.
The double-commit approach ensures pending writes are flushed upon test exit. Ensure that ignoring the commit result (e.g. let _ = ...) is intentional and not hiding potential errors.

Do you want a new GitHub issue to track improved error handling in the drop logic?

grovedb/src/operations/auxiliary.rs (4)

36-36: Unified import of Storage, StorageBatch, StorageContext.
Consolidating imports is fine, but confirm no feature gating breaks references if “full” or other crate features are toggled off.

Also applies to: 39-39


67-67: Post-commit local transaction finalization.
map_ok(|_| tx.commit_local()) neatly finalizes local transactions, but ensure external transactions are not mistakenly committed.

Also applies to: 70-71


90-95: Check error handling for delete_aux.
The approach mirrors put_aux: the flow is correct. No immediate concerns beyond the uniform caution with unwrap_add_cost to handle potential storage context errors.

Also applies to: 98-98, 101-102


112-113: Introducing new batch in get_aux.
A separate StorageBatch is allocated although no writes occur. Consider reusing or skipping a batch altogether if read-only.

merk/src/merk/open.rs (1)

89-89: New meta_cache field in open_layered_with_root_key.
This is aligned with the broader shift to caching. Ensure that subsequent usage does not degrade performance due to repeated cache checks.

grovedb/src/util.rs (2)

27-34: TxRef::new: fallback to an owned transaction if none is provided.
This is a cleaner approach than macros. Keep an eye on usage in large code blocks to ensure partial transactions aren’t unexpectedly committed.


56-114: open_transactional_merk_at_path function.
Replaces macros for opening a transactional Merk instance. Overall logic is sound, but watch out for:

  • Re-building storage context for parent paths
  • Potential performance overhead in repeated calls
grovedb/src/merk_cache.rs (3)

16-17: type TxMerk<'db> = Merk<PrefixedRocksDbTransactionContext<'db>>.
Alias clarifies the nested type usage, though the code remains quite complex.


18-25: MerkCache struct with transaction references and a stored batch.
Aggregating these fields helps unify batch usage across multiple Merk instances. The 'db, 'b lifetimes are well-defined, but watch for conflicts if usage extends beyond single-scope operations.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 24-24: very complex type used. Consider factoring parts into type definitions
warning: very complex type used. Consider factoring parts into type definitions
--> grovedb/src/merk_cache.rs:24:12
|
24 | merks: UnsafeCell<BTreeMap<SubtreePathBuilder<'b, B>, Box<(Cell, TxMerk<'db>)>>>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#type_complexity
= note: #[warn(clippy::type_complexity)] on by default


195-265: Unit tests for double borrowing and subtree propagation.
They cover essential correctness scenarios. Consider additional tests for concurrency or heavier loads to ensure the approach holds under stress.

grovedb/src/element/insert.rs (1)

223-223: Remove needless borrows
Clippy warns that these calls create references that are immediately dereferenced. You can safely remove the extra borrow.

- Self::get_optional(&merk, key, true, grove_version)
+ Self::get_optional(merk, key, true, grove_version)

Also applies to: 360-360, 514-514

🧰 Tools
🪛 GitHub Check: clippy

[warning] 223-223: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:223:32
|
223 | Self::get_optional(&merk, key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow

grovedb/src/lib.rs (2)

347-366: open_root_merk clarity
Short, easy-to-follow method that opens a transactional Merk at the root. Consider more descriptive errors if opening fails.


375-379: Root key behavior
When root_key returns None, consider adding a debug log to indicate an empty database state.

grovedb/src/reference_path.rs (1)

64-131: invert method coverage
This method correctly inverts multiple ReferencePathType variants—a nice approach. Consider extra tests for edge cases like empty arrays.

merk/src/merk/mod.rs (1)

192-192: KVIterator refinements
next_kv is extended to handle partial iteration. Consider clear error messaging if iteration fails or is past the end.

grovedb/src/operations/insert/mod.rs (1)

104-115: Consider reducing the function parameters
Clippy warns that this function has too many arguments (8/7). Consider grouping some parameters, or encapsulating them in a dedicated struct, to improve readability.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 104-115: this function has too many arguments (8/7)
warning: this function has too many arguments (8/7)
--> grovedb/src/operations/insert/mod.rs:104:5
|
104 | / fn add_element_on_transaction<'db, 'b, 'c, B: AsRef<[u8]>>(
105 | | &'db self,
106 | | path: SubtreePath<'b, B>,
107 | | key: &[u8],
... |
114 | | grove_version: &GroveVersion,
115 | | ) -> CostResult<MerkHandle<'db, 'c>, Error> {
| |_______________________________________________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_many_arguments
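A hedged sketch of the parameter-grouping suggestion above; the struct and its fields are invented for illustration and are not taken from add_element_on_transaction's real signature:

// Group related parameters into one struct to keep the call signature small.
struct InsertArgs<'a> {
    path: &'a [&'a [u8]],
    key: &'a [u8],
    propagate: bool,
    batch_size_hint: Option<usize>,
}

fn add_element(args: InsertArgs<'_>) -> Result<(), String> {
    if args.key.is_empty() {
        return Err("empty key".to_string());
    }
    // A real implementation would perform the insertion using the grouped
    // fields; here they are only read to keep the sketch self-contained.
    let _ = (args.path, args.propagate, args.batch_size_hint);
    Ok(())
}

fn main() {
    let path: [&[u8]; 2] = [b"root", b"subtree"];
    let args = InsertArgs {
        path: &path,
        key: b"key1",
        propagate: true,
        batch_size_hint: None,
    };
    assert!(add_element(args).is_ok());
}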

grovedb/src/operations/get/query.rs (2)

184-184: Confirm consistent parameter usage.
The function signature now accepts &Transaction instead of a more generic argument. Verify that broader references are not needed (e.g., for partial commits or advanced transactional functionality).


Line range hint 293-356: Resolve possible duplication.
This pattern with let tx = TxRef::new(...) and subsequent follow_reference calls appears frequently. Extract repeated logic into a helper function if it simplifies code or reduces duplication.

grovedb/src/batch/mod.rs (1)

2372-2373: Use TxRef carefully.
Creating a TxRef from a potentially reused transaction can cause confusion. Document the expected workflow for starting and committing or rolling back the transaction.

merk/src/merk/meta.rs (1)

11-22: Consider accepting &[u8] instead of Vec<u8> for get_meta.

Using a slice may avoid repeated allocations for callers that already have slices. Otherwise, the caching approach looks sound and efficient.
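A hedged sketch of the suggested signature, using a plain in-memory map for illustration rather than Merk's actual meta_cache and storage:

use std::collections::HashMap;

#[derive(Default)]
struct MetaCache {
    entries: HashMap<Vec<u8>, Vec<u8>>,
}

impl MetaCache {
    fn put_meta(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.entries.insert(key, value);
    }

    // `&[u8]` works for lookups because Vec<u8> borrows as [u8], so callers
    // that already hold slices avoid allocating an owned key just to read.
    fn get_meta(&self, key: &[u8]) -> Option<&[u8]> {
        self.entries.get(key).map(|v| v.as_slice())
    }
}

fn main() {
    let mut cache = MetaCache::default();
    cache.put_meta(b"root_key".to_vec(), b"value".to_vec());
    assert_eq!(cache.get_meta(b"root_key"), Some(b"value".as_slice()));
    assert_eq!(cache.get_meta(b"missing"), None);
}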

path/src/util/compact_bytes.rs (1)

34-34: Added Clone to CompactBytes
Allowing the structure to be cloned can simplify usage but be mindful of potentially expensive memory operations if these vectors become large.

grovedb/src/operations/delete/average_case.rs (2)

99-99: Consider descriptive error messages for missing subtrees.
Line 99 yields an error if intermediate layer info is missing. For debugging, you may want to include the height or relevant context in the error message to aid in diagnosing the missing subtree issue.


173-173: Evaluate necessity of get_raw call on non-tree.
When check_if_tree is true, the code calls add_average_case_get_raw_cost, which is heavier than a simpler existence check. If you already know the subtree type, consider whether a specialized call would yield better performance.

costs/src/context.rs (1)

182-184: Keep macro documentation synchronized.
The doc lines 182-184 mention using an external cost accumulator. Ensure that the macro usage examples and doc comments remain updated if the macro evolves further.

grovedb/src/operations/delete/delete_up_tree.rs (1)

139-139: Investigate stop path height logic
The check for stop_path_height on line 139 clarifies the error scenario, but consider verifying in upstream or earlier code to fail fast, especially if unintentional usage can happen.

grovedb-version/src/version/grovedb_versions.rs (1)

207-207: Consider additional test coverage for changed reference insertion.

The new feature version insert_reference_if_changed_value indicates that references are inserted only if a value has changed. Validate corner cases (e.g., references initially absent, references that remain unchanged) to ensure correctness.

grovedb/src/visualize.rs (2)

39-39: Check for potential performance bottlenecks.

Adding Storage to the import list is fine, but if the module is used heavily, keep an eye out for performance in large-scale visualizations. The overhead of repeated calls to storage contexts might grow if the dataset is large.


194-227: Transactional iteration approach is solid, but watch for partial reads.

When iterating elements within a transaction, ensure that the iteration will complete fully even if other writes/wrap-ups occur in parallel. For purely read-based visualization, this should be safe, but concurrency tests can be beneficial.

grovedb/src/tests/tree_hashes_tests.rs (3)

135-136: Add a rollback scenario to cover partial tree insertions.

Now that we have a transaction, consider testing a scenario where insertion is attempted but then rolled back, ensuring that subsequent queries reflect the rollback properly.


237-239: Check concurrency constraints for multi-level paths.

When opening the same transaction at multiple nested paths, confirm that the concurrency model (e.g., single-writer or multi-writer) is preserved. Potential locks or state collisions might arise.


270-272: Confirm correct path composition.

[TEST_LEAF, b"key1", b"key2"].as_ref().into() is forming a path with three segments. If the code evolves, watch for off-by-one or incorrect path references.

path/src/subtree_path_builder.rs (1)

109-109: Deriving Clone on SubtreePathRelative.

Enabling Clone for SubtreePathRelative is reasonable. Clone must be used carefully if large or deeply nested data is stored, to avoid unexpected overhead.

storage/src/rocksdb_storage/storage.rs (1)

275-275: Possible optimization for retrieval during delete
A retrieval is still needed to determine removed bytes. A future optimization may skip retrieving the full value if not strictly required.

grovedb/src/tests/mod.rs (3)

113-115: Clarify tree structure comments.
These descriptive comments for key-value pairs enhance the readability of the test setup.


121-121: Add rationale for shared naming convention.
It might be useful to briefly explain why these keys are repeated across different subtrees for consistent test coverage.


4146-4164: New ignored test for subtree reference restrictions.
This is a necessary test covering the scenario where subtrees cannot be directly referenced. Finalize it when the corresponding logic is fully implemented.

Would you like help updating or refining the test so it can be enabled?

path/Cargo.toml (1)

12-13: Check pinned dependency versions.
Using pinned versions is fine for reproducible builds, but consider whether a wider semver range (e.g., ^0.4 or ^0.13) might be desirable.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f900ea and 70e0aa9.

📒 Files selected for processing (65)
  • costs/src/context.rs (3 hunks)
  • grovedb-version/src/lib.rs (5 hunks)
  • grovedb-version/src/version/grovedb_versions.rs (3 hunks)
  • grovedb-version/src/version/v1.rs (3 hunks)
  • grovedb/src/batch/estimated_costs/average_case_costs.rs (4 hunks)
  • grovedb/src/batch/estimated_costs/worst_case_costs.rs (3 hunks)
  • grovedb/src/batch/just_in_time_reference_update.rs (5 hunks)
  • grovedb/src/batch/mod.rs (15 hunks)
  • grovedb/src/debugger.rs (6 hunks)
  • grovedb/src/element/delete.rs (0 hunks)
  • grovedb/src/element/exists.rs (1 hunks)
  • grovedb/src/element/get.rs (8 hunks)
  • grovedb/src/element/helpers.rs (1 hunks)
  • grovedb/src/element/insert.rs (12 hunks)
  • grovedb/src/element/mod.rs (0 hunks)
  • grovedb/src/element/query.rs (28 hunks)
  • grovedb/src/element/serialize.rs (1 hunks)
  • grovedb/src/error.rs (1 hunks)
  • grovedb/src/estimated_costs/average_case_costs.rs (10 hunks)
  • grovedb/src/estimated_costs/worst_case_costs.rs (6 hunks)
  • grovedb/src/lib.rs (10 hunks)
  • grovedb/src/merk_cache.rs (1 hunks)
  • grovedb/src/operations/auxiliary.rs (6 hunks)
  • grovedb/src/operations/delete/average_case.rs (4 hunks)
  • grovedb/src/operations/delete/delete_up_tree.rs (2 hunks)
  • grovedb/src/operations/delete/mod.rs (25 hunks)
  • grovedb/src/operations/delete/worst_case.rs (4 hunks)
  • grovedb/src/operations/get/average_case.rs (1 hunks)
  • grovedb/src/operations/get/mod.rs (11 hunks)
  • grovedb/src/operations/get/query.rs (18 hunks)
  • grovedb/src/operations/get/worst_case.rs (1 hunks)
  • grovedb/src/operations/insert/mod.rs (11 hunks)
  • grovedb/src/operations/is_empty_tree.rs (2 hunks)
  • grovedb/src/operations/proof/generate.rs (11 hunks)
  • grovedb/src/operations/proof/verify.rs (1 hunks)
  • grovedb/src/query/mod.rs (1 hunks)
  • grovedb/src/reference_path.rs (11 hunks)
  • grovedb/src/replication.rs (6 hunks)
  • grovedb/src/tests/mod.rs (7 hunks)
  • grovedb/src/tests/sum_tree_tests.rs (19 hunks)
  • grovedb/src/tests/tree_hashes_tests.rs (5 hunks)
  • grovedb/src/util.rs (1 hunks)
  • grovedb/src/visualize.rs (2 hunks)
  • merk/src/merk/meta.rs (1 hunks)
  • merk/src/merk/mod.rs (19 hunks)
  • merk/src/merk/open.rs (8 hunks)
  • merk/src/merk/restore.rs (7 hunks)
  • merk/src/proofs/tree.rs (6 hunks)
  • merk/src/test_utils/mod.rs (2 hunks)
  • merk/src/test_utils/temp_merk.rs (6 hunks)
  • merk/src/tree/encoding.rs (1 hunks)
  • merk/src/tree/mod.rs (5 hunks)
  • merk/src/tree/ops.rs (2 hunks)
  • merk/src/tree/walk/mod.rs (4 hunks)
  • path/Cargo.toml (1 hunks)
  • path/src/subtree_path.rs (7 hunks)
  • path/src/subtree_path_builder.rs (8 hunks)
  • path/src/util/compact_bytes.rs (3 hunks)
  • path/src/util/cow_like.rs (1 hunks)
  • storage/src/rocksdb_storage.rs (1 hunks)
  • storage/src/rocksdb_storage/storage.rs (12 hunks)
  • storage/src/rocksdb_storage/storage_context.rs (0 hunks)
  • storage/src/rocksdb_storage/storage_context/context_no_tx.rs (0 hunks)
  • storage/src/rocksdb_storage/tests.rs (7 hunks)
  • storage/src/storage.rs (20 hunks)
💤 Files with no reviewable changes (4)
  • storage/src/rocksdb_storage/storage_context.rs
  • grovedb/src/element/mod.rs
  • grovedb/src/element/delete.rs
  • storage/src/rocksdb_storage/storage_context/context_no_tx.rs
✅ Files skipped from review due to trivial changes (4)
  • grovedb/src/operations/proof/verify.rs
  • grovedb/src/query/mod.rs
  • grovedb/src/operations/get/average_case.rs
  • grovedb/src/element/helpers.rs
🧰 Additional context used
🪛 GitHub Check: clippy
grovedb/src/merk_cache.rs

[warning] 174-174: the following explicit lifetimes could be elided: 'c
warning: the following explicit lifetimes could be elided: 'c
--> grovedb/src/merk_cache.rs:174:11
|
174 | impl<'db, 'c> MerkHandle<'db, 'c> {
| ^^ ^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes
help: elide the lifetimes
|
174 - impl<'db, 'c> MerkHandle<'db, 'c> {
174 + impl<'db> MerkHandle<'db, '_> {
|


[warning] 24-24: very complex type used. Consider factoring parts into type definitions
warning: very complex type used. Consider factoring parts into type definitions
--> grovedb/src/merk_cache.rs:24:12
|
24 | merks: UnsafeCell<BTreeMap<SubtreePathBuilder<'b, B>, Box<(Cell, TxMerk<'db>)>>>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#type_complexity
= note: #[warn(clippy::type_complexity)] on by default

grovedb/src/operations/insert/mod.rs

[warning] 104-115: this function has too many arguments (8/7)
warning: this function has too many arguments (8/7)
--> grovedb/src/operations/insert/mod.rs:104:5
|
104 | / fn add_element_on_transaction<'db, 'b, 'c, B: AsRef<[u8]>>(
105 | | &'db self,
106 | | path: SubtreePath<'b, B>,
107 | | key: &[u8],
... |
114 | | grove_version: &GroveVersion,
115 | | ) -> CostResult<MerkHandle<'db, 'c>, Error> {
| |_______________________________________________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_many_arguments

grovedb/src/operations/proof/generate.rs

[warning] 216-216: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/operations/proof/generate.rs:216:74
|
216 | self.open_transactional_merk_at_path(path.as_slice().into(), &tx, None, grove_version)
| ^^^ help: change this to: tx
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow

grovedb/src/element/insert.rs

[warning] 223-223: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:223:32
|
223 | Self::get_optional(&merk, key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow


[warning] 360-360: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:360:32
|
360 | Self::get_optional(&merk, &key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow


[warning] 514-514: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:514:32
|
514 | Self::get_optional(&merk, &key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow

🔇 Additional comments (356)
merk/src/test_utils/temp_merk.rs (6)

36-39: Ensure feature gating consistency for new imports.
These lines introduce or modify use statements under a cfg(feature = "full") block elsewhere. Verify that conditional imports are consistently applied across the codebase, and confirm that references to TempStorage or StorageBatch do not break when the full feature is toggled.


66-66: Check error handling of get_transactional_storage_context.
unwrap() is fine for testing, but consider using proper error handling or a context-specific message to help debug transaction retrieval issues in more complex scenarios.


81-81: Initialization of tx field looks correct.
This field is wired properly for reference in subsequent calls.


103-103: Reinitializing tx after commit.
Creating a new transaction after each commit is fine in ephemeral test usage. Ensure no incomplete operations rely on the old transaction.


106-106: Repeat usage of get_transactional_storage_context.
The approach correctly renews the context for the newly created transaction.


145-145: Deref and DerefMut to Merk<PrefixedRocksDbTransactionContext<'static>>.
Returning references to 'static-lifetime fields can work in these narrow test constraints, but be mindful that it’s not used in production contexts.

Also applies to: 147-147, 154-154

grovedb/src/operations/auxiliary.rs (6)

32-33: Use of cost_return_on_error macro.
Good practice in capturing and returning costs on errors. Ensure all edge cases for partial/wrapped results are covered.


52-52: Introduce a separate batch with StorageBatch::new().
This logic is beneficial for isolating changes and explicitly committing them. No immediate issues identified.


54-58: TxRef::new(&self.db, transaction) for explicit transaction handling.
This is a clear shift toward a more robust approach to managing transactions. Verify that you are neither reusing nor discarding partial transaction states inadvertently.

Also applies to: 59-64


83-84: Explicit transaction references in delete_aux.
Code appears consistent with put_aux usage. The transaction logic is now more transparent, which simplifies debugging.

Also applies to: 85-88


135-136: Transaction usage in find_subtrees.
Ensures consistent and atomic scanning. The logic is correct for skipping changes if none are introduced during iteration.


154-164: Nested iteration for subtrees.
The while let Some(...) = queue.pop() approach is straightforward for breadth-first or iterative DFS. No immediate issues. The transaction usage is correct for read operations.

merk/src/merk/open.rs (3)

25-25: Added meta_cache: Default::default(), in open_empty.
Enabling a default meta_cache can be beneficial for consistent initialization.


44-44: Same pattern in open_standalone.
Provides a consistent approach to meta_cache. No immediate issues.


66-66: open_base uses meta_cache: Default::default().
Maintaining a unified caching strategy across various open methods fosters code consistency.

grovedb/src/util.rs (8)

1-14: Refactored code from macros to function-based design.
Shifting away from macros generally improves readability and maintainability, especially for complex transaction logic.


15-20: Imports for merk_cache and reference_path.
Centralizes references to MAX_REFERENCE_HOPS and ReferencePathType. This is a good approach to decouple path resolution logic from direct storage calls.


22-24: New TxRef enum for transaction references.
Explicitly differentiating between owned and borrowed transactions helps avoid confusion. Confirm that no concurrency issues arise when multiple references occur under different code paths.


36-44: commit_local for TxRef.
This two-branch approach is a big improvement in clarity over macros. Perfect for preventing accidental commits.


47-53: AsRef<Transaction<'db>> trait for TxRef.
Provides a convenient way to pass around TxRef without requiring explicit pattern matching. Straightforward and flexible.


117-122: ResolvedReference struct for clearer reference resolution results.
Encapsulating reference metadata and the resolved target_merk fosters maintainability and clarity.


125-201: follow_reference implements multi-hop reference resolution.
Implementation includes a visited set for cycle detection. Be mindful of the performance cost if references become deeply nested.
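A hedged sketch of the multi-hop pattern described here (a visited set plus a hop limit); the Entry type, string keys, and the limit value of 10 are placeholders rather than GroveDB's element and reference types:

use std::collections::{HashMap, HashSet};

const MAX_REFERENCE_HOPS: usize = 10; // placeholder limit

enum Entry {
    Value(Vec<u8>),
    Reference(String), // points at another key
}

fn follow_reference(store: &HashMap<String, Entry>, start: &str) -> Result<Vec<u8>, String> {
    let mut visited = HashSet::new();
    let mut current = start.to_string();
    for _ in 0..=MAX_REFERENCE_HOPS {
        // The visited set catches cycles before the hop limit is reached.
        if !visited.insert(current.clone()) {
            return Err(format!("cycle detected at {current}"));
        }
        match store.get(&current) {
            Some(Entry::Value(bytes)) => return Ok(bytes.clone()),
            Some(Entry::Reference(next)) => current = next.clone(),
            None => return Err(format!("missing key {current}")),
        }
    }
    Err("reference hop limit exceeded".to_string())
}

fn main() {
    let mut store = HashMap::new();
    store.insert("a".to_string(), Entry::Reference("b".to_string()));
    store.insert("b".to_string(), Entry::Value(b"payload".to_vec()));
    assert_eq!(follow_reference(&store, "a").unwrap(), b"payload".to_vec());

    // A self-referencing entry is rejected instead of looping forever.
    store.insert("loop".to_string(), Entry::Reference("loop".to_string()));
    assert!(follow_reference(&store, "loop").is_err());
}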


207-260: follow_reference_once for single-level resolution.
This is simpler than follow_reference. The logic and error handling for cyclic references or empty references looks solid.

grovedb/src/merk_cache.rs (7)

1-2: New module introduction.
merk_cache.rs centralizes subtree caching logic, a good approach for repeated access in complex databases.


8-14: Public references to Merk and Transaction.
Exposed references highlight the underlying transaction logic. Makes the cache’s concurrency strategy more transparent.


27-41: MerkCache::new function.
Straightforward constructor sets up the environment for batch + transaction usage. The design ensures that the MerkCache can keep references consistently.


123-133: into_batch finalizes subtrees and returns the batch.
Safe pattern if no more MerkHandles exist. The call to propagate_subtrees ensures all subtree roots are updated. Confirm that partial usage does not occur after the batch is returned.


135-164: propagate_subtrees: bubble updated root hashes up to parents.
Uses pop_first to process nodes from deepest to shallowest. Implementation is sound, but watch large or heavily nested trees for performance.


167-172: MerkHandle struct to manage dynamic borrow.
Having a dedicated handle for unique borrowing is a good approach. The 'db, 'c lifetimes highlight potential expansions for concurrency control.


174-192: MerkHandle::for_merk restricts double mutable borrows.
The taken_handle flag is an effective guard to prevent multiple &mut references. The panic in case of violation is a suitable approach for internal logic.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 174-174: the following explicit lifetimes could be elided: 'c
warning: the following explicit lifetimes could be elided: 'c
--> grovedb/src/merk_cache.rs:174:11
|
174 | impl<'db, 'c> MerkHandle<'db, 'c> {
| ^^ ^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes
help: elide the lifetimes
|
174 - impl<'db, 'c> MerkHandle<'db, 'c> {
174 + impl<'db> MerkHandle<'db, '_> {
|

grovedb/src/operations/get/mod.rs (14)

12-13: Imports align with usage in this file
These imports from grovedb_costs appear to be used consistently for cost-tracking macros and function return types.


15-16: Imports for path and storage context
The additions from grovedb_path and grovedb_storage seem appropriate and match the usage across the file.


21-21: Introduction of TxRef
The import of TxRef aligns with the new transactional approach. It looks consistent with the broader PR objectives to ensure atomic writes.


68-68: Instantiating TxRef
Creating a transaction wrapper here is coherent. Double-check any potential performance overhead but, at first glance, this is fine.


76-76: Passing tx.as_ref()
Providing a reference into TxRef for internal calls is correct. Ensure the underlying transaction remains valid for this scope.


89-89: Ensuring consistent transaction usage
Forwarding tx.as_ref() to follow_reference is coherent; it standardizes transaction handling across lookups.


101-105: Visibility and signature update for follow_reference
Changing to pub(crate) restricts usage within the crate. Switching to &Transaction clarifies that we must have a valid reference rather than an optional transaction. This improves clarity of ownership.


136-140: Mapping invalid path errors
Rewriting Error::InvalidParentLayerPath(_) as a corrupted path ensures more precise error states. This consolidated error mapping is consistent and helps detect reference path issues.


301-304: Handling InvalidParentLayerPath as Ok(None)
This code treats an invalid parent layer path as a non-existent element. Confirm that silently returning None is the intended outcome rather than propagating an error.


334-335: Initializing TxRef in has_raw
This pattern matches other routines. No issues noted.


337-338: Transactional storage context
Acquiring a transactional context ensures writes remain atomic, aligning with the PR’s goals.


345-345: Explicit transaction parameter
Using &Transaction rather than TransactionArg clarifies mandatory transaction usage for subtree existence checks.


386-390: Instantiating transaction reference again
Ensuring consistency with the TxRef::new(...) pattern. This remains in sync with other calls.


420-424: Maintaining internal subtree validation
Same consistency for transaction usage. The approach is uniform across the file, aiding readability.

path/src/subtree_path.rs (10)

37-41: New standard library imports
These added imports (cmp, fmt, hash) are used for implementing Display, PartialOrd, and Hash.


55-82: Display implementation for SubtreePath
This approach helps debug by printing segments in both hex and UTF-8. Keep an eye on potential multi-byte or invalid UTF-8 segments. Handling the Err case from String::from_utf8 is already done gracefully with a fallback to only hex.
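A hedged sketch of the described behaviour on a placeholder type (not the actual SubtreePath): each segment prints as hex, with the UTF-8 form appended only when decoding succeeds:

use std::fmt;

struct PathSegments(Vec<Vec<u8>>);

impl fmt::Display for PathSegments {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for (i, segment) in self.0.iter().enumerate() {
            if i > 0 {
                write!(f, "/")?;
            }
            let hex: String = segment.iter().map(|b| format!("{b:02x}")).collect();
            match std::str::from_utf8(segment) {
                Ok(text) => write!(f, "{hex}({text})")?,
                Err(_) => write!(f, "{hex}")?, // invalid UTF-8: fall back to hex only
            }
        }
        Ok(())
    }
}

fn main() {
    let path = PathSegments(vec![b"root".to_vec(), vec![0xff, 0xfe]]);
    // Prints the UTF-8 form for "root" and hex only for the invalid bytes.
    println!("{path}");
}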


101-101: PartialEq with reverse iteration
Reversing the path segments ensures consistency from the leaf up. This approach is acceptable.


113-136: Custom PartialOrd for path length
Shorter paths come first, then comparing reversed segments for tie-breaking. This design might occasionally surprise, but it’s consistent with the doc comment.
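A hedged sketch of the ordering described here (length first, then reversed segment comparison) on a placeholder Path type:

use std::cmp::Ordering;

#[derive(PartialEq, Eq)]
struct Path(Vec<Vec<u8>>);

impl Ord for Path {
    fn cmp(&self, other: &Self) -> Ordering {
        // Shorter paths sort first; ties are broken by comparing segments
        // from the leaf upward (reversed iteration).
        self.0
            .len()
            .cmp(&other.0.len())
            .then_with(|| self.0.iter().rev().cmp(other.0.iter().rev()))
    }
}

impl PartialOrd for Path {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let short = Path(vec![b"a".to_vec()]);
    let long = Path(vec![b"a".to_vec(), b"b".to_vec()]);
    assert!(short < long); // the shorter path comes first
}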


137-165: PartialOrd for SubtreePathBuilder
Applying equivalent logic for the builder type. The then_with fallback ensures no false equality.


166-183: Ord implementations
Both SubtreePath and SubtreePathBuilder rely on their PartialOrd logic, guaranteeing total ordering. This is logically sound.


184-184: Eq derived consistently
Eq is correct once PartialEq is implemented. Matches the approach above.


214-214: Hash reversed vs. forward iteration
Hashing in reversed order is consistent with your comparison logic, but remain mindful that it differs from the on-path ordering.


280-280: derive_owned_with_child
Method signature is more user-friendly. It allows adding a child segment in one step. This design looks correct.


381-404: ordering() test
Comprehensively verifies partial ordering. It’s good that different-length paths are tested for correct ordering.

grovedb/src/operations/proof/generate.rs (9)

15-15: Importing GroveVersion
Used for version checks in proof operations. No issues found.


24-24: Additional references
Adds references for Element, Error, Transaction, etc. This aligns with the rest of the modifications.


90-90: Encoding proof
Using bincode::encode_to_vec for proof serialization is consistent with other usage in the file.


122-123: Transaction usage for prove_internal
Starting a transaction inside the proof generation function to ensure a consistent state is a solid approach.


137-137: Trace debug
Limited to proof_debug feature usage, helpful for dev diagnostics.


153-153: Re-running raw queries
Your approach ensures references aren’t prematurely resolved in debug mode.


172-172: prove_subqueries call
Ensures subqueries are proven recursively with an updated limit.


193-193: New function signature
Accepts a transaction reference, consistent with the rest of the PR’s transaction changes.


199-199: Handling path queries
path_query.query_items_at_path() returning an Ok(None) indicates the path wasn’t in the query. Ensure that’s truly expected.

grovedb/src/debugger.rs (10)

38-38: Importing Transaction
Preparing for the transaction references introduced below. This is aligned with your move to transactional calls for debugging.


230-230: Starting transaction in fetch_node
Ensures the debug fetch is run against a consistent transactional state. This also prevents partial updates or reads of inconsistent data.


233-233: Transition to open_transactional_merk_at_path
Replacing non-transactional opens with transactional ones is consistent with the PR’s overall direction.


253-253: fetch_root_node with transaction
Root node debugging now also benefits from consistent transactional reads.


256-256: Uniform path usage
Using SubtreePath::empty() clarifies that we’re explicitly dealing with the top-level root.


294-294: Transaction in fetch_with_path_query
Aligns with other debugging methods, ensuring that queries also run in a consistent state.


305-305: Explicit transaction usage
Passing Some(&tx)—ensuring queries are resolved in the correct context.


310-314: Serializing query results back to debug
The added lines pass tx onward so the subcall obtains the tree in a transactional manner. Consistent with overall approach.


319-319: Transaction usage in query_result_to_grovedbg
Makes sense to unify retrieval calls under a single transaction context.


333-335: Reusing open_transactional_merk_at_path
Keeps consistent with the changes above. This ensures the debug flow references data within the same transaction.

grovedb/src/replication.rs (14)

19-19: Adopting TxRef
Replacing TransactionArg references with TxRef improves clarity and aligns with the codebase changes.


172-172: Method signature changes for get_subtrees_metadata
Switching from tx: TransactionArg to a more direct transaction parameter clarifies usage.


184-185: Consistent TxRef usage
Creating a TxRef to pass around for subtree discovery is in line with the PR’s atomic approach.


187-187: Searching subtrees with transaction scope
Ensures only one open scope for subtree searches. This can reduce concurrency pitfalls.


197-221: Fetching parent Merk in a transaction
Line 205 obtains a parent_merk; lines 206–219 handle element retrieval. This chunk properly ties parent checks to the same transaction, preventing partial reads.


250-250: Refined signature for fetch_chunk
Clear naming and consistent approach to passing transactions.


265-266: Acquiring TxRef for chunk
Reuses the same pattern for chunk retrieval to ensure atomic reads.


267-267: Using root_hash(Some(tx.as_ref()))
This ensures the root hash is read in the same transaction context.


271-271: Reusing a single metadata call
Fetching subtrees before proceeding with chunk logic helps unify your approach.


279-281: Transactional Merk open
This is consistent with other parts of the code, ensuring chunk data is read transactionally.


283-285: Handle empty tree
Returning an empty vector if the tree is empty is a safe fallback.


287-297: Chunk producer usage
Produces partial proofs or states. Ensure error reporting is specific enough to debug chunking issues if they arise.


299-305: Chunk producer error
Likewise, the fallback to CorruptedData if chunk production fails. Confirm the error type is suitable for all chunk generation issues.


303-304: Consistent error message
The “Unable to create Chunk producer” message matches the approach for chunk failure.

grovedb/src/tests/sum_tree_tests.rs (7)

8-8: No issues with new import
This introduction of grovedb_storage::{Storage, StorageBatch} is straightforward and poses no concerns.


267-273: Check transaction usage scope
A new transaction is begun to open a transactional Merk. Ensure each transaction has a corresponding commit or rollback to prevent resource leaks.


357-362: Consistent transactional pattern
Starting another transaction in the same pattern as above. The approach is consistent; just confirm the transaction is safely closed at the end.


Line range hint 409-575: test_sum_tree_feature updates
This block properly uses transactions for updates and retrieval of sum tree items. The code handles concurrency well and is logically correct.


Line range hint 683-735: Propagation logic with transactions
The propagation step uses open_transactional_merk_at_path for sum tree manipulation. Everything appears safe and consistent.


Line range hint 819-892: Partial batch operations
Applying partial batches inside a transaction is fine so long as they are committed or rolled back together to preserve atomicity.


Line range hint 964-999: Complex references
These lines handle references within transactional contexts. No apparent logical or security concerns.

grovedb/src/element/insert.rs (4)

15-25: Efficient Delta struct
The Delta structure and its has_changed method are clear, making it straightforward to track modifications.


Line range hint 204-234: Conditional insertion logic
insert_if_changed_value correctly returns a delta if the existing value differs, minimizing unnecessary writes.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 223-223: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:223:32
|
223 | Self::get_optional(&merk, key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow


333-402: Reference updates only when changed
insert_reference_if_changed_value extends the same pattern to references, preventing redundant operations. Good approach.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 360-360: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:360:32
|
360 | Self::get_optional(&merk, &key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow


Line range hint 437-568: Subtree insertion with delta
insert_subtree_if_changed follows the same design principle. The delta-based approach helps avoid extra cost.

🧰 Tools
🪛 GitHub Check: clippy

[warning] 514-514: this expression creates a reference which is immediately dereferenced by the compiler
warning: this expression creates a reference which is immediately dereferenced by the compiler
--> grovedb/src/element/insert.rs:514:32
|
514 | Self::get_optional(&merk, &key, true, grove_version)
| ^^^^^ help: change this to: merk
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow

grovedb/src/lib.rs (3)

138-139: merk_cache module addition
Introducing merk_cache helps modularize caching logic. The approach seems aligned with transactional enhancements.


277-277: Refactor to transactional Merk usage
util::open_transactional_merk_at_path enforces atomic writes, improving correctness in multi-operation sequences.


342-345: Checkpoint creation
create_checkpoint is straightforward. Ensure usage is limited to stable states to avoid partial changes being snapshotted.

grovedb/src/reference_path.rs (2)

138-146: Path display enhancement
Displaying UTF-8 alongside hex is helpful for debugging. No functional concerns.


216-341: Absolute qualified path resolution
absolute_qualified_path thoroughly handles each path variant. Error handling is appropriately robust.

merk/src/merk/mod.rs (4)

40-40: New meta module
Adding a separate module for metadata centralizes logic. Keep module scope focused for maintainability.


48-48: Use of HashMap
Adopting HashMap can enhance performance for metadata lookups. Good choice, as long as concurrency is well managed.


257-259: meta_cache field
This local in-memory cache for metadata is useful. Confirm that commits synchronize with it, so data remains consistent upon restarts.


385-385: Root updates in commit
When updating the root key, you apply storage costs conditionally. Verify the cost logic if partial or repeated updates occur.

Also applies to: 421-421

merk/src/merk/restore.rs (7)

552-552: No issues found


577-577: No issues found


589-589: No issues found


603-603: No issues found


617-617: No issues found


631-631: No issues found


645-645: No issues found

grovedb/src/operations/delete/mod.rs (30)

14-14: No issues found

Also applies to: 17-17, 20-20, 22-22, 26-27


107-108: No issues found


114-114: No issues found


126-133: No issues found


149-157: No issues found


168-168: No issues found


190-198: No issues found


Line range hint 200-238: No issues found


240-256: No issues found


261-261: No issues found


297-298: No issues found


304-304: No issues found


329-330: No issues found


332-332: No issues found


413-414: No issues found


419-419: No issues found


438-442: No issues found


467-468: No issues found


Line range hint 501-568: No issues found


570-575: No issues found


Line range hint 579-596: No issues found


708-712: No issues found


834-834: No issues found


903-903: No issues found


1025-1025: No issues found


1146-1146: No issues found


1240-1240: No issues found


1325-1325: No issues found


1668-1672: No issues found


1737-1739: No issues found

merk/src/tree/ops.rs (6)

517-517: No issues found


523-523: No issues found


537-537: No issues found


Line range hint 726-729: No issues found


Line range hint 808-874: No issues found


Line range hint 1702-1702: No issues found

grovedb/src/element/query.rs (9)

22-22: No issues found

Also applies to: 29-29


455-456: No issues found


487-488: No issues found


493-493: No issues found


542-552: No issues found


555-566: No issues found


569-581: No issues found


584-597: No issues found


Line range hint 603-603: No issues found

grovedb/src/operations/insert/mod.rs (14)

3-3: No issues found

Also applies to: 8-8


45-45: No issues found


73-75: No issues found


79-85: No issues found


90-96: No issues found


127-128: No issues found


130-140: No issues found


152-152: No issues found


171-175: No issues found


Line range hint 187-205: No issues found


218-224: No issues found


231-236: No issues found


1955-1999: No issues found


2000-2000: No issues found

grovedb/src/operations/get/query.rs (7)

8-16: Prefer grouping all version-related imports.
Combining check_grovedb_v0, check_grovedb_v0_with_cost, and GroveVersion into a single import statement is clear. Be sure to remove any dead imports as well.


Line range hint 57-85: Check null or invalid transaction references.
You're creating a TxRef from transaction and using tx.as_ref(). Ensure callers provide a valid transaction object. If transaction can be None or invalid, handle it carefully to avoid runtime panics.


135-135: Validate query merging logic.
PathQuery::merge(path_queries.to_vec(), grove_version) is critical for correct multi-path queries. Consider verifying the returned result set or potential corner cases (e.g., empty queries, cross path merges).


Line range hint 245-269: Leverage concurrency checks around transaction usage.
Multiple queries might be launched concurrently. Ensure the TxRef usage won't cause data races, especially if multi-thread usage is possible.


Line range hint 380-456: Validate sum item references.
In query_item_value_or_sum, referencing sum items can lead to subtle logic errors if references point to other sum items. Double-check references to ensure they are properly validated, and consider test coverage for these scenarios.


Line range hint 475-540: Check transaction scoping.
When referencing TxRef multiple times (e.g., query_sums calls tx.as_ref() in multiple locations), confirm that the transaction’s lifetime usage matches your concurrency model. If the transaction is expected to remain open, confirm proper closure or commit.


Line range hint 605-664: Potential edge cases in terminal_keys().
These lines revolve around path_query.terminal_keys(max_results, grove_version). Validate edge cases (e.g., zero results, offset usage) to confirm it aligns with overall query constraints.

grovedb/src/batch/mod.rs (4)

Line range hint 60-79: Imports and references to TxRef.
Introducing TxRef in the batch logic is a step toward consistent transaction handling. Confirm that the rest of the codebase no longer references old or partially removed constructs such as open_batch_merk_at_path.


Line range hint 2252-2312: Batch consistency checks and error handling.
When you apply the batch with updated element flags, ensure that any partial application failures roll back changes. This block adds extensive error mapping—make sure it covers all failure cases to avoid partial commits.


2410-2428: Handle leftover operations carefully.
Partial batches can leave leftover operations in left_over_operations. Confirm the leftover set is indeed consistent if subsequent logic or merges fail.


2432-2450: Add-on operations.
Calling cost_based_add_on_operations can significantly alter the flow. Ensure the implementation is well-tested, especially regarding concurrency or re-entrant calls.

grovedb/src/element/exists.rs (1)

7-7: Remove or rename legacy import references.
You’ve removed GroveVersionError, which cleans up unused errors. Verify all references to older version errors are fully replaced, ensuring no residual references remain.

storage/src/rocksdb_storage.rs (1)

39-39: Removal of PrefixedRocksDbStorageContext usage.
No references remain to PrefixedRocksDbStorageContext in the code snippet. Ensure that external modules or older methods do not rely on it.

path/src/util/cow_like.rs (1)

38-38: Allow cloning for convenience.
Adding #[derive(Clone)] helps. Given this type mirrors Cow, confirm there’s no unexpected overhead if cloned frequently with large data.

grovedb/src/operations/get/worst_case.rs (1)

7-7: Removal of GroveVersionError import looks correct and consistent with the general refactoring.

No further issues spotted.

grovedb-version/src/lib.rs (5)

1-1: Refined import usage for version handling.

The updated import of GroveVersion without GroveVersionError appears consistent with the broader removal of the latter from individual files.


Line range hint 11-16: Macro error references updated to $crate::error::GroveVersionError.

Pointing the error to $crate::error keeps macro usage more robust and self-contained. No further issues noted.


Line range hint 27-31: Consistent scoping of GroveVersionError in check_grovedb_v0.

This aligns with the new error-handling approach across the codebase.


Line range hint 42-47: Refactor within check_merk_v0_with_cost ensures correct error path reference.

This maintains consistency in macro error handling.


Line range hint 58-62: Applying $crate::error::GroveVersionError reference for check_merk_v0.

All changes reinforce consistent error scoping.

merk/src/merk/meta.rs (3)

24-32: put_meta logic is straightforward and correct.

Inserting metadata into both the storage and meta_cache ensures consistency for subsequent retrievals.


34-42: delete_meta logic is clean and coherent.

Removing the entry from storage and meta_cache ensures no stale data in the cache.
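
A simplified write-through sketch of the storage-plus-cache pattern described in these two comments; the MetaStorage trait and method names are stand-ins, not Merk's actual API:

```rust
use std::collections::HashMap;

// Write-through metadata cache sketch; `MetaStorage` and its methods are
// placeholders, not Merk's actual storage API.
trait MetaStorage {
    fn put_meta(&mut self, key: &[u8], value: &[u8]);
    fn delete_meta(&mut self, key: &[u8]);
}

struct MetaCache<S: MetaStorage> {
    storage: S,
    // `None` records a deletion so reads don't fall back to stale storage data.
    cache: HashMap<Vec<u8>, Option<Vec<u8>>>,
}

impl<S: MetaStorage> MetaCache<S> {
    fn put(&mut self, key: &[u8], value: Vec<u8>) {
        // Update storage and cache together so later reads agree with writes.
        self.storage.put_meta(key, &value);
        self.cache.insert(key.to_vec(), Some(value));
    }

    fn delete(&mut self, key: &[u8]) {
        self.storage.delete_meta(key);
        self.cache.insert(key.to_vec(), None);
    }
}

// Minimal in-memory backend so the sketch compiles and runs on its own.
struct InMemory(HashMap<Vec<u8>, Vec<u8>>);

impl MetaStorage for InMemory {
    fn put_meta(&mut self, key: &[u8], value: &[u8]) {
        self.0.insert(key.to_vec(), value.to_vec());
    }

    fn delete_meta(&mut self, key: &[u8]) {
        self.0.remove(key);
    }
}

fn main() {
    let mut meta = MetaCache {
        storage: InMemory(HashMap::new()),
        cache: HashMap::new(),
    };
    meta.put(b"root_hash", b"abc".to_vec());
    meta.delete(b"root_hash");
    // The deletion is visible in the cache as an explicit `None`.
    assert_eq!(meta.cache.get(b"root_hash".as_slice()), Some(&None));
}
```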


45-111: Tests provide solid coverage for metadata operations.

  • Verifies retrieval, caching behavior, and deletion of metadata.
  • Good usage of OperationCost checks.

Overall, the coverage looks thorough.

grovedb/src/error.rs (1)

56-59: New BidirectionalReferenceRule error variant introduced.

Consider adding unit tests or verifying actual usage to ensure coverage.

grovedb/src/element/serialize.rs (1)

5-5: Removal of GroveVersionError import appears clean.
This aligns with the broader initiative to simplify version-related error handling.

grovedb/src/operations/delete/worst_case.rs (4)

10-10: New usage of check_grovedb_v0_with_cost
Switching to the cost-based version checking macro is consistent with the rest of the refactor, ensuring version constraints are validated alongside cost calculations.


62-62: Ensuring consistent cost handling
The change from referencing cost to passing it by value matches the improved cost management pattern seen in this PR.


141-141: Passing cost by value
This is consistent with the removal of references for simpler error and cost handling across the codebase.


152-152: Macro call update
Maintains consistency in cost passing methodology, helping to avoid confusion or potential lifetime issues with references.

path/src/util/compact_bytes.rs (2)

68-89: Implementation of pop_segment
The function properly retrieves and removes the last segment, updating internal state and segment count. This looks correct and robust, with clear indexing.
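
For intuition, a toy segmented buffer showing the invariant a pop_segment-style method has to keep, namely that the flat byte storage and the segment index stay in sync; this is not CompactBytes's real layout:

```rust
// Toy segmented byte buffer; not CompactBytes's actual internal layout.
struct Segments {
    data: Vec<u8>,
    // (offset, len) of each segment, in push order.
    index: Vec<(usize, usize)>,
}

impl Segments {
    fn new() -> Self {
        Segments { data: Vec::new(), index: Vec::new() }
    }

    fn push_segment(&mut self, segment: &[u8]) {
        self.index.push((self.data.len(), segment.len()));
        self.data.extend_from_slice(segment);
    }

    // Removes and returns the last segment, truncating the backing buffer so
    // the byte storage and the segment count remain consistent.
    fn pop_segment(&mut self) -> Option<Vec<u8>> {
        let (offset, len) = self.index.pop()?;
        let segment = self.data[offset..offset + len].to_vec();
        self.data.truncate(offset);
        Some(segment)
    }
}

fn main() {
    let mut s = Segments::new();
    s.push_segment(b"alpha");
    s.push_segment(b"beta");
    assert_eq!(s.pop_segment().as_deref(), Some(&b"beta"[..]));
    assert_eq!(s.pop_segment().as_deref(), Some(&b"alpha"[..]));
    assert_eq!(s.pop_segment(), None);
}
```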


187-206: Test coverage for pop_segment
These tests comprehensively verify the new pop functionality, ensuring that segments are popped in the correct order and that the empty state is handled appropriately.

grovedb/src/batch/just_in_time_reference_update.rs (5)

56-56: Serialization within the sum item check
Passing cost by value to the macro is consistent with the new approach. Confirm that this doesn't hinder performance in tight loops.


96-96: Deferred serialization with old flags
Again, the shift to passing cost directly is coherent with the refactoring, ensuring uniformity in cost-based operations.


118-118: Split removal bytes calculation
The macro call with the updated cost reference approach helps keep cost management consistent across the codebase.


128-128: Flags update logic
Mapping errors here to MerkError::ClientCorruptionError is a clean approach that provides consistent error contexts.


148-149: Avoiding infinite loops during repeated serialization
Correctly increments the loop counter and re-serializes as needed. The cost handling changes align with the rest of the file.

grovedb/src/operations/delete/average_case.rs (2)

70-74: Validate estimated_layer_info.get() usage.
You're accessing layer_info with estimated_layer_info.get(height as u64). If layer_info is unexpectedly missing, the code gracefully returns an error. This is acceptable, but ensure that upstream logic guarantees these entries exist where intended so calls won't silently fail.


161-161: Check usage of GroveDb::add_average_case_get_merk_at_path.
Verify that the added costs from this call are appropriately handled in parent invocations. If anything is bypassing the cost container, it could lead to undercounted or inconsistent cost values.

costs/src/context.rs (2)

119-127: Commendation on the new for_ok method.
This functionality is helpful for executing side effects on successful results while preserving the original CostResult. It's a clean approach for logging or instrumentation.
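
The review only names for_ok, so the sketch below uses stand-in types to show the intended shape of such a combinator: run a side effect on the success value and return the wrapped result, cost included, unchanged. These are not the costs crate's actual definitions:

```rust
// Stand-in types; the real OperationCost/CostContext/CostResult differ.
struct OperationCost {
    seek_count: u32,
}

struct CostContext<T> {
    value: T,
    cost: OperationCost,
}

type CostResult<T, E> = CostContext<Result<T, E>>;

trait ForOk<T, E> {
    fn for_ok(self, f: impl FnOnce(&T)) -> Self;
}

impl<T, E> ForOk<T, E> for CostResult<T, E> {
    fn for_ok(self, f: impl FnOnce(&T)) -> Self {
        if let Ok(ref value) = self.value {
            f(value); // e.g. logging or metrics; the result is returned unchanged
        }
        self
    }
}

fn main() {
    let r: CostResult<u32, ()> = CostContext {
        value: Ok(42),
        cost: OperationCost { seek_count: 1 },
    };
    let r = r.for_ok(|v| println!("got {v}"));
    assert!(r.value.is_ok());
    assert_eq!(r.cost.seek_count, 1);
}
```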


206-206: Confirm the reference parameter removal impact.
Changing the macro parameter from ( &$cost:ident, ... ) to ( $cost:ident, ... ) can affect call sites that relied on passing mutable references. Confirm that no downstream code was broken by this change.

grovedb-version/src/version/v1.rs (4)

42-42: Addition of get_with_value_hash version field
Ensure tests covering get_with_value_hash in the codebase correctly reference this new version field.


55-55: Check insert_reference_if_changed_value coverage.
The new version field implies new logic for reference updates. Ensure that any code path leveraging this feature is tested.


58-58: Validate insert_subtree_if_changed logic.
When used, confirm that both subtree creation and potential old subtree cleanup are tested. We want to ensure correct cost accrual and no stale references left behind.


77-77: New follow_reference_once feature
Confirm that multi-hop references (if supported) aren't mistakenly using this single follow reference logic. Also check that any relevant fallbacks exist for references that require deeper traversal.

grovedb/src/operations/delete/delete_up_tree.rs (1)

9-9: Unused import cleanup
The removal of GroveVersionError is good if no longer needed. Ensure that no other references remain, and confirm that future error handling is consistent throughout.

grovedb-version/src/version/grovedb_versions.rs (3)

51-51: Add a unit test to ensure correctness of follow_reference_once.

This new version field, follow_reference_once, suggests a unique operation mode for reference following. Consider adding or extending a test that specifically exercises and verifies the expected behavior for both existing references and references that should only be followed once.


194-194: Ensure consistent usage across codebase.

The new get_with_value_hash feature version is introduced; verify that calls to retrieve value hashes are correctly updated to reflect this new version. This consistency check helps to prevent mismatches in version-based feature flags.


210-210: Align subtree insertion logic with the new feature version.

insert_subtree_if_changed implies logic similar to insert_reference_if_changed_value. Ensure these are consistently handled and well-tested for scenarios when no changes to the subtree data are detected.

merk/src/test_utils/mod.rs (4)

314-316: Transaction parameter clarifies storage context usage.

The introduction of tx in empty_path_merk improves clarity by making explicit the transaction context. This change aligns with transactional best practices.


322-322: Validate fallback behavior when batch is None.

You’re wrapping storage.get_transactional_storage_context(...) and passing Some(batch). Consider whether the calling code should gracefully handle the case where a batch reference is missing; a fallback approach or an additional note in the docstring would prevent confusion.


335-337: Transaction parameter clarifies read-only usage.

The transaction passing into empty_path_merk_read_only is good for read consistency. Ensure tests confirm no unintentional writes occur within this read-only context.


343-343: Ensure read isolation.

When calling get_transactional_storage_context(...) in read-only mode, confirm that no side effects can happen if the underlying transaction is incomplete or rolled back. A quick test could catch unexpected mutations.

grovedb/src/visualize.rs (1)

44-44: Confirm safe usage of TxRef.

TxRef is used to maintain a reference to the transaction. Ensure no double-borrow or lifetime issues occur as the code evolves.

grovedb/src/tests/tree_hashes_tests.rs (6)

59-59: Transaction start is correct.

Starting a transaction before opening the Merk ensures atomic reads. This pattern is consistent with the rest of the PR.


62-64: Confirm error handling for open_transactional_merk_at_path.

Make sure the method returns a clear error if the path does not exist or if transaction context is invalid.


138-140: Ensure consistent batch usage.

Using Some(&batch) is consistent, but confirm that test scenarios cover empty batches or repeated usage across multiple paths.


181-183: Transactional approach extends to nested subtrees.

Opening a Merk under [TEST_LEAF, b"key1"] with the same transaction is correct. This ensures nested subtrees remain consistent.


234-234: Confirm transaction scope after nested insertions.

Starting another transaction mid-test can be tricky. Verify that the test doesn’t inadvertently mix transactions or overshadow the prior state.


247-249: Consistent usage of open_transactional_merk_at_path.

Reusing the same transaction reference is consistent with the overall approach. Good for ensuring atomic changes across multiple levels.

merk/src/tree/encoding.rs (1)

55-55: Validate cost usage consistency.

This macro call correctly updates the cost by passing the cost variable instead of a mutable reference. Ensure that other parts of the code consistently adhere to this approach to keep cost accounting uniform.
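
To illustrate why passing the cost binding by identifier works, here is a shape-alike sketch (not the real cost_return_on_error_no_add! macro, and the types are stand-ins): because the macro expands in the caller's scope, it can name the local cost directly and attach it to the early return on error.

```rust
// Stand-in types; the real cost types and macro differ.
struct OperationCost {
    seek_count: u32,
}

struct CostContext<T> {
    value: T,
    cost: OperationCost,
}

macro_rules! return_on_error_no_add {
    ($cost:ident, $result:expr) => {
        match $result {
            Ok(value) => value,
            // The accumulated cost is attached to the error path; nothing
            // further is added for the failed operation.
            Err(e) => return CostContext { value: Err(e), cost: $cost },
        }
    };
}

fn first_byte(cost_so_far: OperationCost, raw: &[u8]) -> CostContext<Result<u8, String>> {
    let cost = cost_so_far;
    let byte = return_on_error_no_add!(
        cost,
        raw.first().copied().ok_or_else(|| "empty input".to_string())
    );
    CostContext { value: Ok(byte), cost }
}

fn main() {
    let ok = first_byte(OperationCost { seek_count: 3 }, &[7, 0]);
    assert_eq!(ok.value, Ok(7));
    assert_eq!(ok.cost.seek_count, 3);

    let err = first_byte(OperationCost { seek_count: 1 }, &[]);
    assert!(err.value.is_err());
    assert_eq!(err.cost.seek_count, 1);
}
```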

path/src/subtree_path_builder.rs (9)

49-56: Clone implementation is valid.

The manual clone correctly delegates to base.clone() and relative.clone(). This approach ensures that both fields are cloned recursively without omitting any references or data.


60-66: Hash ordering logic is consistent.

Hashing relative first and then base upholds the reversal-based iteration order. This is acceptable as long as the ordering is intentional and clearly documented.


Line range hint 67-75: PartialEq comparison on segments is correct.

Comparing reversed segments through reverse_iter() is a valid approach. Ensure that all code paths consistently rely on the same iteration order to avoid subtle equality mismatches.


Line range hint 77-85: Maintaining reversed-based comparison.

This PartialEq impl retains consistent reversed-segment logic. Verify that no external code depends on a forward iteration order when comparing paths.


Line range hint 87-95: Cross-type PartialEq consistency.

Allowing SubtreePathBuilder to be compared against SubtreePath fosters flexibility. The reversed iteration approach remains uniform, preventing equality conflicts.


97-97: Implementation of Eq is safe.

Since PartialEq is fully defined and no additional fields can undermine symmetry, reflexivity, or transitivity, this empty Eq implementation is valid.
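
A self-contained illustration of the property these impls rely on, namely that Hash and PartialEq walk the segments in the same (reversed) order so equal paths hash equally; this is not SubtreePathBuilder's real structure:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Illustration only: a two-part path whose equality and hashing both walk the
// segments in the same (reverse) order, so `a == b` implies equal hashes.
struct TwoPartPath {
    base: Vec<Vec<u8>>,
    relative: Vec<Vec<u8>>,
}

impl TwoPartPath {
    // Deepest segment first: relative reversed, then base reversed.
    fn reverse_iter(&self) -> impl Iterator<Item = &[u8]> + '_ {
        self.relative
            .iter()
            .rev()
            .chain(self.base.iter().rev())
            .map(|segment| segment.as_slice())
    }
}

impl PartialEq for TwoPartPath {
    fn eq(&self, other: &Self) -> bool {
        self.reverse_iter().eq(other.reverse_iter())
    }
}

impl Eq for TwoPartPath {}

impl Hash for TwoPartPath {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Must visit segments in the same order as `eq`, or hash map lookups
        // will misbehave for equal paths that are split differently.
        for segment in self.reverse_iter() {
            segment.hash(state);
        }
    }
}

fn main() {
    // The same logical path split differently between base and relative.
    let a = TwoPartPath {
        base: vec![b"tree".to_vec(), b"subtree".to_vec()],
        relative: vec![b"key".to_vec()],
    };
    let b = TwoPartPath {
        base: vec![b"tree".to_vec()],
        relative: vec![b"subtree".to_vec(), b"key".to_vec()],
    };
    assert!(a == b);

    let mut set = HashSet::new();
    set.insert(a);
    assert!(set.contains(&b));
}
```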


161-181: Ownership-handling methods look solid.

The owned_from_iter and owned_from_path methods properly construct a fully owned SubtreePathBuilder backed by CompactBytes. This design helps decouple lifetimes from the original references.


193-210: Push-segment logic is well-structured.

Transitioning from Empty or Single into Multi ensures consistent management of path segments. This approach simplifies code and prevents dropping an existing segment by accident.


243-273: Parent derivation approach is correct.

derive_parent_owned carefully handles different states (empty, single, or multi) while preserving owned data. The fallback to base.derive_parent() covers edge cases gracefully.

grovedb/src/element/get.rs (7)

7-10: Improved import structure.

Removing conditional flags for Merk and other items simplifies the code and aids maintainability. Confirm there are no unintended side effects, particularly in minimal or non-default build configurations.

Also applies to: 12-13, 16-19


71-71: Consistent usage of cost_return_on_error_no_add!.

The macro call ensures that no additional cost is appended upon error. This usage is aligned with other parts of the file, maintaining uniform error handling.


130-130: Optional decode with direct error.

Deserialization errors are propagated immediately, preventing partial reads and preserving code clarity. This approach is good for avoiding silent data corruption.


141-141: Graceful element deserialization.

Returning an error when the element is not deserializable adheres to fail-fast principles, reducing the chance of corrupted states. Keep track of potential repeated errors for debugging.


218-218: Error handling retains cost context.

Using a separate cost_return_on_error! ensures that any error raised during reference conversion also updates the cost properly, enhancing debugging.


251-290: New get_with_value_hash method is robust.

The method retrieves both the element and its hash from Merk, returning a PathKeyNotFound error if absent. This strengthens consistency with the rest of the API and streamlines retrieval of hashed data.
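
A rough sketch of the retrieval pattern using stand-in types (the actual Merk signatures and error variants differ): fetch the element together with its value hash and map a missing key to a not-found error.

```rust
use std::collections::HashMap;

// Stand-in types; not GroveDB's actual Element, hash, or error definitions.
type ValueHash = [u8; 32];

#[derive(Debug, Clone, PartialEq)]
struct Element(Vec<u8>);

#[derive(Debug)]
enum Error {
    PathKeyNotFound(String),
}

struct FakeMerk {
    entries: HashMap<Vec<u8>, (Element, ValueHash)>,
}

impl FakeMerk {
    // Return the element and its value hash, or a not-found error if absent.
    fn get_with_value_hash(&self, key: &[u8]) -> Result<(Element, ValueHash), Error> {
        self.entries
            .get(key)
            .cloned()
            .ok_or_else(|| Error::PathKeyNotFound(format!("key 0x{} not found", hex(key))))
    }
}

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let merk = FakeMerk { entries: HashMap::new() };
    assert!(matches!(
        merk.get_with_value_hash(b"missing"),
        Err(Error::PathKeyNotFound(_))
    ));
}
```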


305-306: Transactional context usage is appropriate.

Acquiring a transactional storage context is a safe approach for concurrency control. Ensure that commits and rollbacks are properly tested to avoid partial writes or data inconsistencies.

Also applies to: 308-308, 328-328, 333-333

merk/src/tree/walk/mod.rs (4)

233-233: Refined error handling macro usage.

Passing cost (instead of &mut cost) to the macro remains consistent with the updated cost approach. Confirm that the macro expansions correctly propagate any cost updates.


278-278: Cost injection check.

Again, verifying that the cost is accurately tracked in the updated macro is essential. The function call’s logic appears correct.


324-324: Preserving cost updates in put_value_and_reference_value_hash.

Ensuring consistent macro usage prevents cost-mismatch bugs. The approach is sound.


371-371: Uniform macro usage promotes consistency.

Maintaining the same convention (cost_return_on_error_no_add!(cost, ...)) across all put methods helps unify code style and error handling patterns.

storage/src/storage.rs (36)

46-47: Kudos on the improved documentation.
These lines add clarity to the top-level storage trait, helping future contributors understand how to work with and extend the storage layer.


103-103: Ensure consistent naming of “worst case” context creation.
The documentation references “storage context creation.” Confirm throughout the codebase that references to any “worst case” or “storage context” cost remain consistent for clarity.


116-116: Clear explanation of raw iterator usage.
Retaining a dedicated type for raw iteration fosters a clean separation of concerns. No issues detected.


120-120: Parameter usage is consistent with parent trait definitions.
No potential pitfalls spotted with this method.


129-129: Auxiliary storage function well-defined.
All logic around “put_aux” aligns with the overarching trait structure.


137-137: Consistent naming improves clarity.
The method name put_root is straightforward, reflecting its goal of inserting into tree roots storage.


145-145: Metadata handling is coherent.
Adding separate operations for metadata fosters maintainability. No further comments.


153-153: Delete method is consistent with put usage.
Maintaining symmetry between put/delete operations is recommended. No issues found.


160-160: Auxiliary delete method is consistent.
No potential bugs spotted.


167-167: Separation of root deletion is logical.
Clear naming and intent.


174-174: Deleting metadata entry is well-defined.
Approach is consistent with the rest of the trait.


181-181: Data retrieval from the main storage region.
The return type remains a CostResult<Option<Vec<u8>>, Error>, which is consistent and robust.


184-184: Auxiliary key retrieval logic.
This method adheres to the same result pattern, ensuring consistency.


187-187: Root retrieval aligns with the put_root usage.
No performance or logical concerns here.


190-190: Metadata retrieval is aligned with other retrieval methods.
Continue to ensure that the optional return is handled properly at the call site.


199-199: Method naming is consistent.
raw_iter effectively conveys a direct iteration approach. No issues found.


214-214: Batch put_aux method is symmetrical with the trait design.
No concerns at the moment.


223-223: New put_root in batch.
Clear separation of root-based operations from standard data operations helps prevent confusion.


234-234: Delete operation for aux storage.
Remains consistent with put/delete naming across the codebase.


238-238: Subtree roots deletion in batch.
This naming scheme is unambiguous.


242-242: Introduction of RawIterator doc.
This clarifies the iteration over stored records. Looks good.


272-272: Refactoring of batch structure is well-labeled.
This doc line clarifies the role of the StorageBatch.


308-309: Descriptive naming of len method.
Straightforward approach to counting operations. The added docs help maintain clarity.


317-320: is_empty method provides clarity.
This method is a convenient check before committing or discarding a batch.
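
For context, a generic deferred-operations accumulator in the spirit of a storage batch, showing where len and is_empty fit; the names are illustrative, not the storage crate's actual StorageBatch:

```rust
// Generic deferred-operations accumulator: operations are recorded now and
// applied only at commit time. Names are illustrative placeholders.
enum BatchOperation {
    Put { key: Vec<u8>, value: Vec<u8> },
    Delete { key: Vec<u8> },
}

#[derive(Default)]
struct Batch {
    operations: Vec<BatchOperation>,
}

impl Batch {
    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.operations.push(BatchOperation::Put {
            key: key.to_vec(),
            value: value.to_vec(),
        });
    }

    fn delete(&mut self, key: &[u8]) {
        self.operations.push(BatchOperation::Delete { key: key.to_vec() });
    }

    // Number of deferred operations waiting to be committed.
    fn len(&self) -> usize {
        self.operations.len()
    }

    // Convenient check before deciding to commit or drop the batch.
    fn is_empty(&self) -> bool {
        self.operations.is_empty()
    }
}

fn main() {
    let mut batch = Batch::default();
    assert!(batch.is_empty());
    batch.put(b"key", b"value");
    batch.delete(b"old");
    assert_eq!(batch.len(), 2);
}
```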


341-341: put_aux now has separate doc.
Keeping consistent naming for “aux” operations is beneficial.


358-358: Clearly labeled put_root operation.
Matches the approach used for data/aux storage, no issues.


375-375: Dedicated metadata insertion method.
This design unifies the approach for different data categories.


403-403: delete_aux doc clarifies auxiliary storage usage.
No immediate suggestions.


414-414: delete_root doc.
Maintaining a consistent naming pattern fosters better maintainability.


425-425: delete_meta doc is consistent.
No concerns noted.


476-476: Iterator’s doc block is well-structured.
Reflects the iteration strategy across different operation sets.


517-517: AbstractBatchOperation doc improvement.
Better enumeration doc fosters clarity when handling deferred operations.


529-529: PutAux doc is consistent with naming.
No action needed.


535-535: Root storage addition.
Clear labeling. No concerns.


541-541: Metadata put operation doc
Maintains alignment with the rest of the enum.


552-552: DeleteAux doc.
All categories of storage appear thoroughly documented.

grovedb/src/batch/estimated_costs/worst_case_costs.rs (3)

195-195: Validate presence in self.paths.
The code returns an error if path is missing from the cache. This is crucial for preventing untracked updates. The approach is appropriate.


207-207: add_worst_case_get_merk_at_path usage.
Passing &mut cost for in-place modifications is consistent with the broader cost handling pattern. Implementation looks correct.


250-250: add_worst_case_get_merk_at_path usage for base path.
Similar pattern ensures we handle the base path consistently if it hasn’t been cached.

grovedb/src/estimated_costs/worst_case_costs.rs (4)

21-21: Version checking improvements.
Ensuring backward compatibility with older GroveDB versions is crucial for external integrators.


222-222: Use of cost_return_on_error_no_add! for value.serialized_size.
This ensures that if serialization fails, cost accumulation is halted early. Looks correct.


290-290: cost_return_on_error_no_add! in replace scenario.
Ensuring errors in serialization do not add unnecessary costs is consistent with the code’s approach.


332-332: Apply the same approach for patch operations.
Handling errors preemptively prevents inaccurate or inflated cost metrics.

merk/src/proofs/tree.rs (7)

382-382: Refined error handling for op.
Switching to cost_return_on_error_no_add!(cost, op) ensures that an invalid operation won’t inflate cost metrics.


385-386: Pairwise pop from stack.
The error check for potential underflow is correct. The macro usage is consistent with the cost approach in the codebase.


403-404: Similar pop logic for the Child branch.
Reusing the same approach keeps code consistent and reduces duplication.


421-422: ParentInverted case preserves the same pattern.
Great to see consistent pop error checks.


439-440: ChildInverted operation.
Aligns with the rest of the code’s approach to attach children conditionally, then push back the parent.


473-473: Visit node call with cost_return_on_error_no_add!.
This ensures the function halts on error without incorrectly adding further cost.


496-496: Maintains consistent key ordering checks.
Necessary to ensure proof correctness in the inverted scenario.

storage/src/rocksdb_storage/storage.rs (11)

47-47: Imports aligned with transactional approach
The added import statement for transaction-based contexts is consistent with the shift to transactional storage usage.


191-191: Consistent cost propagation strategy
Passing cost instead of &cost in cost_return_on_error_no_add! aligns with the recent macro usage changes for clearer cost accumulation.


210-210: Maintain uniform usage of cost variables
This change ensures consistency when accumulating storage costs using the updated macro pattern.


231-231: Correct approach to optional cost_info
Continuing the new pattern of directly referencing cost supports consistent macro usage for optional cost tracking.


251-251: Enhanced clarity in cost handling
The direct usage of cost rather than a reference improves readability of error returns within macro calls.


302-302: Consistent pattern in cost tracking
Adopting the updated macro usage ensures the cost variable is treated uniformly across all batch delete operations.


330-330: Maintaining the same retrieval pattern for root
Similar retrieval logic for root deletion ensures we stay compatible with the new macro usage and cost model.


360-360: Uniform approach to meta CF removal
Applying the same removal logic and cost tracking macro usage keeps the system’s cost calculations consistent.


581-586: Transactional test usage
These lines transition the test to use get_transactional_storage_context with a proper transaction. Ensures consistent coverage for transactional behavior.


603-608: Ensuring coherence in repeated context usage
Similarly updated to use transactions in the test, reinforcing consistent usage of batch context.


649-652: Verifying final iteration cost consistency
These lines confirm that changes in a separate subtree do not affect iteration costs. The test approach using transactional context is well-structured.

grovedb/src/estimated_costs/average_case_costs.rs (8)

19-19: Version consistency check
Importing check_grovedb_v0 utilities ensures version handling remains consistent with the project’s approach to feature gating.


86-86: Improved flags size estimation
Updates to pass cost directly to the macro reflect the streamlined error/return pattern. No issues found.


174-174: No-add macro usage for layered flags
Maintains consistency with other cost calculations. The approach is sound.


236-236: Insertion size measurement
Switching from &cost to cost in the macro call follows the revised design for average case accounting.


297-297: Tree replacement cost
Direct cost usage in the macro improves clarity and reduces overhead.


310-310: Same-size item replacement
Ensuring cost is captured without references is consistent with changes throughout the file.


352-352: Patch operation cost handling
Properly wrapping cost_return_on_error_no_add! with a direct cost variable aids maintainability.


394-394: Delete element average case cost
The direct usage of cost matches the updated macro pattern, improving code uniformity.

grovedb/src/batch/estimated_costs/average_case_costs.rs (4)

199-199: Graceful handling of missing paths
Throws an error if paths.get(path) is absent. Properly wrapped with the updated macro for cost error.


221-221: Structured retrieval of layer info
Ensures the path is found before proceeding. Maintains the same cost macro usage for error handling.


236-236: Average case merk retrieval
Adds average-case cost for retrieving the Merk tree. The direct cost approach is consistent with new macro semantics.


275-275: Base path check
Proactively calculates cost for retrieving the base path's Merk. Reflects similar macro usage updates.

storage/src/rocksdb_storage/tests.rs (9)

558-558: Initiating multi-context batch
Using storage batch with get_transactional_storage_context ensures an atomic approach for CF methods.


560-563: Obtaining transactional context
Fetching the new contexts underscores the updated design for batch-based queries.


615-618: Post-commit checks
Queries confirm data is now visible after commit_multi_context_batch. Adequately tests transactional isolation.


Line range hint 649-652: Verifying final read in a new transaction
The new usage of transactional context reaffirms data visibility only after commit.


702-705: DB batch usage in no-transaction context
Shows consistent staging of put operations and deferring actual commit. The code is correct.


791-792: Starting separate transactions
Maintains the isolation between transaction and other_tx. This approach is correct.


794-797: Ensuring separate transaction contexts
Data in one transaction remains invisible to the other, validating concurrency model.


1040-1045: Check ephemeral changes
These lines confirm that uncommitted data is not visible in a new transaction.


1061-1066: Validate data after commit
Re-establishing a new transaction to confirm data is now persisted. Good test coverage.

merk/src/tree/mod.rs (5)

719-719: Use of cost_return_on_error_no_add!() with direct cost is consistent.
The updated macro invocation properly matches the pattern established in other areas, avoiding the previous borrowing issues.


775-775: Consistent direct passing of cost to the macro.
No issues found; the code aligns with the new error handling approach.


829-829: Maintains consistency with refactored cost handling.
This invocation is in line with the updated refactoring for improved readability and consistency.


891-891: Ensures uniformity in the cost_return_on_error_no_add!() usage.
The direct passing of cost prevents confusion around borrowing. This is a good improvement in code clarity.


974-974: Refactored macro call preserves clarity in error handling.
Retaining identical patterns for passing cost is desirable for maintainability.

grovedb/src/tests/mod.rs (12)

117-118: Improved clarity of partitioned keys.
Documenting the hierarchy helps new contributors quickly grasp the test's intended structure.


123-123: Self-documenting approach.
Naming keys distinctly helps ensure future maintainers understand the branching logic.


127-129: Further consistency in naming.
These additional key-value pairs maintain the pattern, which is important for test coverage of deeper branches.


131-133: Same structured naming for deep subtree.
Retaining a clear key progression (like key4, key5, key6) reduces confusion.


136-138: Incremental naming for deeper references.
These references (key7, key8, key9) demonstrate multiple layers of the test tree.


140-141: Preservation of numeric order for subtrees.
Makes debugging location-based issues easier.


143-145: Added clarity for key12, key13, key14.
This completes the extended tree dimension in the tests.


3121-3121: Explicit transaction start.
This aligns with the new atomic commit requirements and ensures changes are properly tracked.


3125-3125: Transaction-based storage context retrieval logic.
Ensures that subsequent operations are guaranteed to be atomic.


3214-3215: Transaction creation matches new approach.
Consistency in how transactions are introduced throughout the code is beneficial.


3216-3218: Refined usage of db field under transaction.
This retains clarity about which database reference is used during concurrency.


3930-3930: Wiping storage within test.
Helps verify the full teardown logic and ensures test isolation, but double-check that it doesn’t interfere with other tests running in parallel.

grovedb/src/operations/is_empty_tree.rs (4)

7-7: Introduced TxRef for transaction management.
This improves readability over manually handling transaction references.


28-29: Instantiation of TxRef ensures code consistency.
The approach to referencing the database + transaction is in line with other refactored methods.


32-36: Verifying subtree existence with the new transaction reference.
Allows the call to remain consistent with the new TxRef argument usage.


39-44: Opening transactional MERK with TxRef.
Using a direct call instead of macros clarifies the transaction-handling flow.

@fominok fominok self-assigned this Dec 27, 2024
@fominok fominok mentioned this pull request Dec 27, 2024